KL-based Control of the Learning Schedule for Surrogate Black-Box Optimization
Abstract
This paper investigates the control of an ML component within the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) devoted to black-box optimization. A known weakness of CMA-ES is its sample complexity, i.e. the number of evaluations of the objective function needed to approximate the global optimum. This weakness is commonly addressed through surrogate optimization: learning an estimate of the objective function, a.k.a. a surrogate model, and replacing most evaluations of the true objective function with the (inexpensive) evaluation of the surrogate model. This paper presents a principled control of the learning schedule (when to relearn the surrogate model), based on the Kullback-Leibler divergence between the current search distribution and the training distribution of the former surrogate model. The experimental validation of the proposed approach shows significant performance gains on a comprehensive set of ill-conditioned benchmark problems, compared to the best state of the art, including the quasi-Newton high-precision BFGS method.
Keywords: expensive black-box optimization, evolutionary algorithms, surrogate models, Kullback-Leibler divergence, CMA-ES.
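The retraining criterion described above can be made concrete for Gaussian search distributions such as the one CMA-ES maintains, where the Kullback-Leibler divergence has a closed form. The sketch below is a minimal illustration, assuming both the current search distribution and the distribution that generated the surrogate's training set are full-covariance Gaussians N(m, sigma^2 C); the function names and the kl_threshold parameter are hypothetical and do not come from the paper.

import numpy as np

def kl_gaussians(m0, C0, m1, C1):
    # Closed-form KL( N(m0, C0) || N(m1, C1) ) for full-covariance Gaussians.
    d = m0.shape[0]
    C1_inv = np.linalg.inv(C1)
    diff = m1 - m0
    trace_term = np.trace(C1_inv @ C0)
    quad_term = diff @ C1_inv @ diff
    _, logdet0 = np.linalg.slogdet(C0)
    _, logdet1 = np.linalg.slogdet(C1)
    return 0.5 * (trace_term + quad_term - d + logdet1 - logdet0)

def should_retrain(mean_now, cov_now, mean_train, cov_train, kl_threshold=0.5):
    # Relearn the surrogate once the current search distribution has drifted
    # too far from the distribution under which the surrogate was trained.
    return kl_gaussians(mean_now, cov_now, mean_train, cov_train) > kl_threshold

In a CMA-ES loop, cov_now would typically be sigma**2 * C for the current step size and covariance matrix, and the threshold would in practice be tuned or adapted rather than fixed as above.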
Similar Resources
Backpropagation through the Void: Optimizing control variates for black-box gradient estimation
Gradient-based optimization is the foundation of deep learning and reinforcement learning, but is difficult to apply when the mechanism being optimized is unknown or not differentiable. We introduce a general framework for learning low-variance, unbiased gradient estimators, applicable to black-box functions of discrete or continuous random variables. Our method uses gradients of a surrogate ne...
PROGRESS: Progressive Reinforcement-Learning-Based Surrogate Selection
In most engineering problems, experiments for evaluating the performance of different setups are time consuming, expensive, or even both. Therefore, sequential experimental designs have become an indispensable technique for optimizing the objective functions of these problems. In this context, most of the problems can be considered as a black-box. Specifically, no function properties are known ...
Funneled Bayesian Optimization for Design, Tuning and Control of Autonomous Systems
Bayesian optimization has become a fundamental global optimization algorithm in many problems where sample efficiency is of paramount importance. Recently, a large number of new applications have been proposed in fields such as robotics, machine learning, experimental design, simulation, etc. In this paper, we focus on several problems that appear in robotics and autonomous systems: algori...
Scalable Hyperparameter Optimization with Products of Gaussian Process Experts
In machine learning, hyperparameter optimization is a challenging but necessary task that is usually approached in a computationally expensive manner such as grid search. For this reason, surrogate-based black-box optimization techniques such as sequential model-based optimization have been proposed, which allow for faster hyperparameter optimization. Recent research proposes to also integr...
Stochastic radial basis function algorithms for large-scale optimization involving expensive black-box objective and constraint functions
This paper presents a new algorithm for derivative-free optimization of expensive black-box objective functions subject to expensive black-box inequality constraints. The proposed al...
Journal: CoRR
Volume: abs/1308.2655
Pages: -
Publication date: 2013